
    A multiple optical tracking based approach for enhancing hand-based interaction in virtual reality simulations

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Research exploring natural virtual reality interaction has seen significant success in optical tracker-based approaches, enabling users to interact freely using their hands. Optical trackers can provide users with real-time, high-fidelity virtual hand representations for natural interaction and an immersive experience. However, work in this area has identified four key issues: occlusion, field of view, stability and accuracy. To overcome these issues, researchers have investigated approaches such as using multiple sensors. Research has shown multi-sensor approaches to be effective in improving recognition accuracy. However, such approaches typically use statically positioned sensors, which introduce body-occlusion issues that make tracking hands challenging. Machine learning approaches have also been explored to improve gesture recognition. However, such approaches typically require a pre-set gesture vocabulary that limits user actions, and larger vocabularies hinder real-time performance. This thesis presents an optical hand-based interaction system that comprises two Leap Motion sensors mounted on a VR headset at different orientations. Novel approaches to the aggregation and validation of sensor data are presented. A machine learning sub-system is developed to validate hand data received by the sensors. Occlusion detection, stability detection, inferred hands and a hand interpolation sub-system are also developed to ensure that valid hand representations are always shown to the user. In addition, a mesh conformation sub-system ensures 3D objects are appropriately held in a user's virtual hand. The presented system addresses the four key issues of optical sensors to provide a smooth and consistent user experience.
The MOT system is evaluated against traditional interaction approaches: gloves, motion controllers and a single front-facing sensor configuration. The comparative sensor evaluation analysed the validity and availability of tracking data, along with each sensor's effect on the MOT system. The results show the MOT system provides a more stable experience than the front-facing configuration and produces significantly more valid tracking data. The results also demonstrate the effectiveness of a 45-degree sensor configuration in comparison to a front-facing one. Furthermore, the results demonstrate the effectiveness of the MOT system's solutions at handling the four key issues with optical trackers.
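As a rough illustration of the aggregation and validation ideas described above, the following Python sketch fuses per-frame readings from two head-mounted sensors. The class names, confidence threshold and fallback behaviour are illustrative assumptions, not the thesis's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandFrame:
    palm: tuple            # (x, y, z) palm position
    confidence: float      # sensor-reported tracking confidence, 0..1

CONF_THRESHOLD = 0.5       # illustrative validity cut-off

def aggregate(front: Optional[HandFrame], angled: Optional[HandFrame],
              last_valid: Optional[HandFrame]) -> Optional[HandFrame]:
    """Fuse readings from the two sensors into a single hand representation."""
    valid = [f for f in (front, angled)
             if f is not None and f.confidence >= CONF_THRESHOLD]
    if len(valid) == 2:
        # Both sensors track the hand: confidence-weighted average of positions.
        total = valid[0].confidence + valid[1].confidence
        palm = tuple((a * valid[0].confidence + b * valid[1].confidence) / total
                     for a, b in zip(valid[0].palm, valid[1].palm))
        return HandFrame(palm, max(valid[0].confidence, valid[1].confidence))
    if len(valid) == 1:
        # One sensor occluded or the hand out of its field of view: trust the other.
        return valid[0]
    # Neither reading is valid: hold the last valid frame, a stand-in for the
    # interpolation / inferred-hands sub-systems described in the abstract.
    return last_valid
```

Keeping the fusion per-frame and falling back to the last valid frame is one simple way to ensure a hand representation is always available to the renderer.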

    Interactive Reading Using Low Cost Brain Computer Interfaces

    This work shows the feasibility of document reader applications using a consumer-grade, non-invasive BCI headset. Although Brain Computer Interface (BCI) devices are beginning to target the consumer level, the level at which they can actually detect brain activity is limited. Progress has, however, been achieved in allowing interaction between a human and a computer when this interaction is limited to around two actions. We employed the Emotiv Epoc, a low-priced BCI headset, to design and build a proof-of-concept document reader system that allows users to navigate a document using this low-cost BCI device. Our prototype has been implemented and evaluated with 12 participants who were trained to navigate documents using signals acquired by the Emotiv Epoc.
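The two-action interaction model can be sketched as a simple mapping from classified commands to navigation events. The command labels here ("push"/"pull") are hypothetical stand-ins, and the Emotiv SDK calls that would produce them are not shown.

```python
PAGE_COUNT = 10            # illustrative document length

def navigate(page: int, command: str) -> int:
    """Advance or rewind the document based on a classified BCI command."""
    if command == "push":      # e.g. a trained 'push' thought -> next page
        return min(page + 1, PAGE_COUNT - 1)
    if command == "pull":      # a trained 'pull' thought -> previous page
        return max(page - 1, 0)
    return page                # neutral / unrecognised: stay put

# A stream of classified commands drives paging:
page = 0
for cmd in ["push", "push", "pull", "neutral", "push"]:
    page = navigate(page, cmd)
```

Clamping at the document boundaries keeps misclassified commands from producing invalid states, which matters when the classifier's accuracy is limited.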

    Investigating control of virtual reality snowboarding simulator using a Wii Fit board

    This work presents a virtual reality snowboarding application which uses a Nintendo Wii balance board for richer interaction modalities. We present the application and test the prototype with 7 participants to investigate immersion, enjoyability and, to an extent, performance. The outcomes of the study will be used to form research questions and indicate likely directions for future research and development.
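Balance-board control of this kind typically maps the rider's weight shift to steering. The Wii balance board reports load from four corner sensors; the sketch below derives a lateral centre-of-pressure value from them. The abstract does not describe the prototype's actual mapping, so this is purely illustrative.

```python
def centre_of_pressure(tl: float, tr: float, bl: float, br: float) -> float:
    """Lateral centre of pressure in [-1, 1] (negative = left, positive = right)
    from the board's four corner load cells: top-left, top-right,
    bottom-left, bottom-right."""
    total = tl + tr + bl + br
    if total == 0:
        return 0.0             # nobody on the board
    return ((tr + br) - (tl + bl)) / total
```

Feeding this value into a steering angle (scaled and smoothed per frame) is one straightforward way to turn weight shifts into carving input.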

    TTracker: Using finger detection to improve touch typing training

    Touch typing software teaches a user to use the correct finger combinations with the correct keyboard keys. The ultimate goal is to teach the typist to type faster, more accurately and more ergonomically. Our research presents a working prototype of a software and hardware setup that tracks not only the speed and accuracy of the correct keys being pressed but also which fingers are used to press them; a dimension of training that has not previously been integrated into touch typing tutorials. We use novel technology (the Leap Motion) to accurately detect the interaction between the user and the keyboard, giving precise feedback to the user in order for him or her to improve.
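The extra training dimension, checking which finger pressed which key, can be sketched as a lookup against a touch-typing finger map. The map below covers only the left-hand home row and is illustrative; the prototype's real mapping and Leap Motion integration are not shown.

```python
# Illustrative touch-typing finger map (left-hand home row only).
FINGER_MAP = {
    "a": "left-pinky",
    "s": "left-ring",
    "d": "left-middle",
    "f": "left-index",
}

def assess_keystroke(key: str, finger: str) -> str:
    """Classify a keystroke using both the key pressed and the finger used."""
    expected = FINGER_MAP.get(key)
    if expected is None:
        return "unmapped"      # key not covered by the training map
    return "correct" if finger == expected else "wrong-finger"
```

Distinguishing "wrong-finger" from a plain wrong key is what lets the tutor give feedback on technique rather than on accuracy alone.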

    Dual-mode user interfaces for web based interactive 3D virtual environments using three.js

    3D objects are now being embedded within HTML pages without the need for additional software, such as browser plug-ins. However, 2D and 3D web content are still typically treated as separate entities with limited interaction. Our research presents a working prototype implementation of a dual-mode user interface for interactive 3D environments. The developed interface allows the user to instantly switch between a traditional hypertext interface and an immersive 3D environment that incorporates 2D HTML elements. The results of an initial user study show that 2D and dual-mode interfaces allow for quicker retrieval of information than 3D websites alone and result in higher user satisfaction.

    Presenting and Investigating the Efficacy of an Educational Interactive Mobile Application for British Sign Language Using Hand Gesture Detection Techniques

    In this paper we present the design and development of a mobile application that assists learners of British Sign Language. The application uses image recognition of a participant's gestures to give feedback on their correctness. Through experimenting with different algorithms and a user test, the efficacy of such an application is presented, along with the limitations of current technology.